
    Automatic systems diagnosis without behavioral models

    Recent feedback obtained while applying model-based diagnosis (MBD) in industry suggests that the costs involved in behavioral modeling (both expertise and labor) can outweigh the benefits of MBD as a high-performance diagnosis approach. In this paper, we propose an automatic approach, called ANTARES, that avoids behavioral modeling entirely. Reducing modeling effort sacrifices diagnostic accuracy, as the size of the ambiguity group (i.e., the set of components that cannot be discriminated due to lack of information) increases, which in turn increases the misdiagnosis penalty. ANTARES breaks down the ambiguity group by considering each component's false negative rate (FNR), which is estimated using an analytical expression. Furthermore, we study the performance of ANTARES on a number of logic circuits taken from the 74XXX/ISCAS benchmark suite. Our results clearly indicate that sacrificing modeling information degrades diagnostic quality; however, exploiting FNR information improves the quality, attaining the diagnostic performance of an MBD approach.
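
    The idea of ranking fault candidates without a behavioral model, using only test outcomes and an estimated per-component FNR, can be illustrated with a minimal sketch. This is not the actual ANTARES algorithm; the involvement matrix, the `fnr` values, and the single-fault assumption below are assumptions made for the example.

```python
# Hedged sketch: model-less, spectrum-based ranking of single-fault
# candidates using per-component false negative rates (FNR).
# Illustrative only -- not the ANTARES implementation itself.

def rank_candidates(spectrum, errors, fnr):
    """spectrum[t][c] = 1 if component c is involved in test t.
    errors[t] = 1 if test t failed.  fnr[c] = estimated probability
    that a faulty component c nevertheless lets a test pass."""
    scores = []
    for c in range(len(fnr)):
        # Likelihood of the observed pass/fail pattern assuming c is
        # the (single) faulty component.
        likelihood = 1.0
        for t, row in enumerate(spectrum):
            if row[c] == 1:
                p_fail = 1.0 - fnr[c]   # faulty component involved
            else:
                p_fail = 0.0            # fault not exercised: test passes
            likelihood *= p_fail if errors[t] else (1.0 - p_fail)
        scores.append((likelihood, c))
    # Higher likelihood first; ties form the residual ambiguity group.
    return sorted(scores, reverse=True)

# Toy example: 3 components, 3 tests; tests 0 and 1 failed.
spectrum = [[1, 1, 0],
            [0, 1, 1],
            [1, 0, 1]]
errors = [1, 1, 0]
fnr = [0.1, 0.05, 0.2]   # assumed per-component FNR estimates
print(rank_candidates(spectrum, errors, fnr))  # component 1 ranks first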

    The Delft MS curriculum on embedded systems


    Predicting Contention in Distributed-Memory Machines

    A compile-time prediction technique is outlined that yields low-cost, highly symbolic performance models, to be used during the initial optimization loops in parallel system design. Aimed at providing acceptable accuracy across a large parameter search space, the approach extends conventional static analysis with asymptotic queueing analysis in order to account for the potentially dominating effects of resource contention. In this paper we report on the accuracy of the prediction method, comparing it both to simulation results and to actual measurements on a distributed-memory machine.
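
    The general idea of combining a contention-free static estimate with an asymptotic queueing bound can be sketched as follows. The service demands, machine parameters, and the specific max-of-bounds formula are assumptions of this sketch; the paper's actual model is symbolic and considerably more refined.

```python
# Hedged sketch: contention-free static estimate combined with an
# asymptotic queueing (capacity) bound, in the spirit of the approach
# described above.  All parameters are illustrative assumptions.

def predict_time(comp_time, msgs_per_proc, msg_service_time,
                 n_procs, n_links):
    """Asymptotic execution-time prediction.
    comp_time        : per-processor computation time (static analysis)
    msgs_per_proc    : messages each processor sends
    msg_service_time : service time of one message on a link
    n_procs, n_links : machine parameters
    """
    # Contention-free estimate: computation plus raw message latency.
    t_static = comp_time + msgs_per_proc * msg_service_time

    # Asymptotic queueing bound: the bottleneck resource (here, the
    # network links) cannot serve work faster than its capacity.
    total_msgs = msgs_per_proc * n_procs
    t_contention = total_msgs * msg_service_time / n_links

    # The prediction is dominated by whichever effect is larger.
    return max(t_static, t_contention)

# Toy example: contention starts to dominate as more processors
# share a fixed number of links.
for p in (4, 16, 64):
    print(p, predict_time(comp_time=1.0, msgs_per_proc=1000,
                          msg_service_time=0.001, n_procs=p, n_links=8))
```

    For small processor counts the static term dominates; beyond the crossover point the link-capacity bound takes over, which is exactly the contention effect the asymptotic analysis is meant to capture.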

    Mapping Unstructured Applications into Nested Parallelism

    Abstract. Nested parallel programming models, in which the task graph associated with a computation is series-parallel, are easy to program and have good analysis properties. These can be exploited for efficient scheduling, accurate cost estimation, or automatic mapping to different architectures. However, restricting synchronization structures to nested series-parallelism may bring performance losses due to a less parallel solution, as compared to more generic ones based on unstructured models (e.g., message passing). A new algorithmic technique is presented which allows automatic transformation of the task graph of any unstructured application into a series-parallel form (nested parallelism). The tool is applied to random and irregular application task graphs to investigate the potential performance degradation when converting them to series-parallel form. Results show that a wide range of irregular applications can be expressed using a structured coordination model with a small loss of parallelism.

    1 Introduction. A common practice in high-performance computing is to program applications in terms of the low-level concurrent programming model provided by the target machine, trying to exploit its maximum possible performance. Portable APIs, such as message-passing interfaces (e.g., MPI, PVM), propose an abstraction of the machine architecture while still obtaining good performance. However, all these unrestricted coordination models can be extremely error-prone and inefficient, as the synchronization dependencies that a program can generate are complex and difficult to analyze by humans or compilers. Moreover, important implementation decisions such as scheduling or data layout become extremely difficult to optimize. Facing these shortcomings, more abstract parallel programming models have been proposed and studied which restrict the synchronization and communication structures available to the programmer (see e.g. [1]).
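
    The paper's transformation algorithm is not reproduced here, but the simplest member of this family of techniques (a layering SP-ization that separates topological levels with barriers) illustrates what converting an arbitrary DAG to series-parallel form means, and how the resulting loss of parallelism can be measured. The function names, the layering strategy, and the critical-path metric are choices made for this sketch, not the paper's method.

```python
# Hedged sketch: layering SP-ization of an arbitrary task DAG --
# topological levels separated by barriers -- and a comparison of
# weighted critical paths before and after.  Illustrative only.

from collections import defaultdict

def levels(dag, n_tasks):
    """dag: dict task -> list of successors.  Returns each task's
    topological level (longest distance from a source)."""
    indeg = defaultdict(int)
    for u in dag:
        for v in dag[u]:
            indeg[v] += 1
    level = {t: 0 for t in range(n_tasks)}
    ready = [t for t in range(n_tasks) if indeg[t] == 0]
    while ready:
        u = ready.pop()
        for v in dag.get(u, []):
            level[v] = max(level[v], level[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return level

def critical_paths(dag, weight, n_tasks):
    """Weighted critical path of the original DAG vs. its layered
    SP version (a barrier after every level)."""
    lvl = levels(dag, n_tasks)
    # Original DAG: longest weighted path, via DP in level order.
    finish = {t: weight[t] for t in range(n_tasks)}
    for u in sorted(range(n_tasks), key=lvl.get):
        for v in dag.get(u, []):
            finish[v] = max(finish[v], finish[u] + weight[v])
    cp_dag = max(finish.values())
    # SP version: each level waits for its slowest task.
    by_level = defaultdict(list)
    for t, l in lvl.items():
        by_level[l].append(weight[t])
    cp_sp = sum(max(ws) for ws in by_level.values())
    return cp_dag, cp_sp

# Toy DAG: sources 0 and 1; edges 0->2, 1->2, 1->3, 2->4, 3->4.
dag = {0: [2], 1: [2, 3], 2: [4], 3: [4]}
weight = {0: 5, 1: 1, 2: 1, 3: 5, 4: 1}
print(critical_paths(dag, weight, n_tasks=5))  # (7, 11): SP form is slower
```

    The gap between the two critical paths (7 vs. 11 here) is precisely the loss of parallelism incurred by forcing the barrier structure, which is what the paper's experiments quantify over random and irregular graphs.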

    Measuring the Performance Impact of SP-restricted Programming in Shared-Memory Machines

    Abstract. A number of interesting properties for scheduling and/or cost estimation arise when using parallel programming models that restrict the topology of a program's task graph to an SP (series-parallel) form. A critical question, however, is to what extent the ability to exploit parallelism is compromised when only SP coordination structures are allowed. This paper presents new application parameters which are key factors in predicting this loss of parallelism, at both the language modeling and program execution levels, for shared-memory architectures. Our results indicate that a wide range of parallel computations can be expressed using a structured coordination model with a loss of parallelism that is small and predictable.

    1 Introduction. In high-performance computing, currently the only programming methods that are typically used to deliver the huge potential of high-performance parallel machines are methods that rely on the use of either the data-parallel (vector) programming model or simply the native message-passing model. Given current compiler technology, unfortunately, these programming models still expose the high sensitivity of machine performance to programming decisions made by the user. As a result, the user is still confronted with complex optimization issues such as computation vectorization, communication pipelining, and, most notably, code and data partitioning. Consequently, a program, once mapped to a particular target machine, is far from portable unless one accepts a high probability of dramatic performance loss.
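
    One way to see how an application parameter can predict the loss of parallelism is a Monte-Carlo estimate of the slowdown ratio gamma = T_sp / T_unrestricted as task-weight variance grows. The graph shape below (independent chains, deliberately simple so the ratio isolates the cost of barrier synchronization) and the use of weight variance as the parameter are assumptions of this sketch, not the parameters defined in the paper.

```python
# Hedged sketch: Monte-Carlo estimate of the loss of parallelism
# gamma = T_sp / T_unrestricted for synthetic task graphs.  T_sp
# assumes a barrier after every level (fork-join SP form); the
# unrestricted graph is a set of independent chains.  Illustrative only.

import random

def gamma(n_levels, width, weight_var, trials=200, seed=1):
    rng = random.Random(seed)
    ratios = []
    for _ in range(trials):
        # Task weights per level: mean 1.0, variance controlled.
        w = [[max(0.01, rng.gauss(1.0, weight_var ** 0.5))
              for _ in range(width)] for _ in range(n_levels)]
        # SP form: every level waits for its slowest task.
        t_sp = sum(max(level) for level in w)
        # Unrestricted: lane i runs through all levels independently;
        # the critical path is the slowest lane.
        t_free = max(sum(w[l][i] for l in range(n_levels))
                     for i in range(width))
        ratios.append(t_sp / t_free)
    return sum(ratios) / len(ratios)

# gamma is 1.0 for uniform weights and grows with weight variance:
for var in (0.0, 0.1, 0.5, 1.0):
    print(f"weight variance {var}: gamma = {gamma(20, 8, var):.2f}")
```

    Under these assumptions the barrier penalty is driven by how much task weights vary within a level, which is the flavor of application parameter the abstract refers to.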